Client: Cross-variable Linear Integrated Enhanced Transformer for Multivariate Long-Term Time Series Forecasting
Long-term time series forecasting (LTSF) is a crucial aspect of modern
society, playing a pivotal role in facilitating long-term planning and
developing early warning systems. While many Transformer-based models have
recently been introduced for LTSF, doubts have been raised regarding the
effectiveness of attention modules in capturing cross-time dependencies. In
this study, we design a mask-series experiment to validate this assumption and
subsequently propose the "Cross-variable Linear Integrated ENhanced Transformer
for Multivariate Long-Term Time Series Forecasting" (Client), an advanced model
that outperforms both traditional Transformer-based models and linear models.
Client employs linear modules to learn trend information and attention modules
to capture cross-variable dependencies. Meanwhile, it simplifies the embedding
and position encoding layers and replaces the decoder module with a projection
layer. Essentially, Client incorporates non-linearity and cross-variable
dependencies, which sets it apart from conventional linear models and
Transformer-based models. Extensive experiments on nine real-world datasets
confirm that Client achieves state-of-the-art (SOTA) performance with the least
computation time and memory consumption among previous Transformer-based models. Our
code is available at https://github.com/daxin007/Client
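The architecture described above can be illustrated with a minimal sketch. This is not the authors' implementation (which is at the linked repository); it is a toy NumPy stand-in with random weights in place of learned parameters, showing the three ingredients the abstract names: a linear module over time for trend, attention applied across variables rather than across time steps, and a projection layer in place of a decoder. The function name `client_forecast` and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def client_forecast(x, horizon, d_model=16):
    """Toy Client-style forecast for one multivariate series.

    x: (L, C) array -- L past time steps, C variables.
    Returns: (horizon, C) forecast.
    All weights are random placeholders standing in for learned parameters.
    """
    L, C = x.shape
    # 1) Linear module: each variable's history is mapped directly to the
    #    forecast horizon by a linear layer over time (trend information).
    W_trend = rng.normal(scale=1.0 / L, size=(horizon, L))
    trend = x.T @ W_trend.T                               # (C, horizon)

    # 2) Cross-variable attention: each variable's whole series is one token,
    #    so attention mixes information across variables, not across time.
    W_embed = rng.normal(scale=1.0 / L, size=(L, d_model))
    tokens = x.T @ W_embed                                # (C, d_model)
    Wq, Wk, Wv = (rng.normal(scale=1.0 / d_model, size=(d_model, d_model))
                  for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model), axis=-1)   # (C, C)
    mixed = attn @ v                                      # (C, d_model)

    # 3) A projection layer replaces the Transformer decoder.
    W_proj = rng.normal(scale=1.0 / d_model, size=(d_model, horizon))
    return (trend + mixed @ W_proj).T                     # (horizon, C)

forecast = client_forecast(rng.normal(size=(96, 7)), horizon=24)
print(forecast.shape)
```

Note that attending over the variable axis keeps the attention matrix at size C x C (number of variables), independent of sequence length, which is consistent with the abstract's claim of low computation and memory cost.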
Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning
Vision-language pre-training (VLP) models are vulnerable, especially to
multimodal adversarial samples, which can be crafted by adding imperceptible
perturbations to both original images and texts. However, under the black-box
setting, no prior work has explored the transferability of multimodal
adversarial attacks against VLP models. In this work, we take CLIP as the
surrogate model and propose a gradient-based multimodal attack method to
generate transferable adversarial samples against the VLP models. By applying
the gradient to optimize the adversarial images and adversarial texts
simultaneously, our method can better search for and attack the vulnerable
images and text information pairs. To improve the transferability of the
attack, we utilize contrastive learning including image-text contrastive
learning and intra-modal contrastive learning to have a more generalized
understanding of the underlying data distribution and mitigate the overfitting
of the surrogate model so that the generated multimodal adversarial samples
have a higher transferability for VLP models. Extensive experiments validate
the effectiveness of the proposed method.
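The gradient-based joint optimization of image and text perturbations can be sketched as follows. This is a deliberately simplified toy, not the paper's method: random linear maps stand in for CLIP's encoders, a continuous feature vector stands in for discrete text tokens, a single-pair cosine alignment replaces the paper's image-text and intra-modal contrastive losses, and finite differences replace backpropagation. The names `pgd_attack`, `align_loss`, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for CLIP's two encoders: random linear maps into a shared space.
D_IMG, D_TXT, D_EMB = 32, 16, 8
W_img = rng.normal(size=(D_EMB, D_IMG))
W_txt = rng.normal(size=(D_EMB, D_TXT))

def embed(W, x):
    z = W @ x
    return z / np.linalg.norm(z)

def align_loss(img, txt):
    """Cosine alignment of a matched image-text pair; the attack minimizes it."""
    return float(embed(W_img, img) @ embed(W_txt, txt))

def num_grad(f, x, eps=1e-4):
    """Finite-difference gradient (placeholder for autograd in a real attack)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def pgd_attack(img, txt, eps=0.05, alpha=0.01, steps=20):
    """Jointly perturb the image and the (continuous) text features with
    signed gradient steps, projected onto an L-infinity ball of radius eps."""
    adv_img, adv_txt = img.copy(), txt.copy()
    for _ in range(steps):
        gi = num_grad(lambda x: align_loss(x, adv_txt), adv_img)
        gt = num_grad(lambda t: align_loss(adv_img, t), adv_txt)
        # Descend on the alignment so the matched pair is pushed apart.
        adv_img = np.clip(adv_img - alpha * np.sign(gi), img - eps, img + eps)
        adv_txt = np.clip(adv_txt - alpha * np.sign(gt), txt - eps, txt + eps)
    return adv_img, adv_txt

img, txt = rng.normal(size=D_IMG), rng.normal(size=D_TXT)
adv_img, adv_txt = pgd_attack(img, txt)
print(align_loss(img, txt), align_loss(adv_img, adv_txt))
```

In the paper's black-box setting, the perturbations are crafted against this kind of surrogate and then transferred to unseen VLP models; the contrastive terms omitted here are what the abstract credits with reducing overfitting to the surrogate.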
A Comparative Study on Software Piracy between China and America
Software piracy in China has been a serious problem for decades. This paper builds on an existing software piracy model and adds a cultural dimension. We study how U.S. and Chinese college students differ in their attitudes toward software piracy, perceived punishment, subjective norms, perceived behavioral control, and piracy intention. Through data analysis, we aim to identify the key factors that influence piracy intention, characterize the differences between Chinese and American students, and provide insights for fighting piracy in China.
- …